Transformers have achieved remarkable success in sequence modeling and beyond, but suffer from quadratic computational and memory complexities with respect to the length of the input sequence. Efficient transformers leveraging techniques such as sparse and linear attention and hashing tricks have been proposed to reduce this quadratic complexity, but often at a significant cost in accuracy. In response, we first interpret the linear attention and residual connections in computing the attention map as gradient descent steps. We then introduce momentum into these components and propose the \emph{momentum transformer}, which leverages momentum to improve the accuracy of linear transformers while maintaining linear memory and computational complexity. Furthermore, we develop an adaptive strategy that computes the model's momentum value from the optimal momentum for quadratic optimization. This adaptive momentum eliminates the need to search for an optimal momentum value and further enhances the performance of the momentum transformer. A range of experiments on autoregressive and non-autoregressive tasks, including image generation and machine translation, demonstrates that the momentum transformer outperforms popular linear transformers in both training efficiency and accuracy.
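To make the momentum idea concrete, here is a minimal sketch of causal linear attention in which the additive key-value state update is treated as a gradient step and augmented with a heavy-ball momentum term. The feature map, the momentum coefficient `beta`, and all names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def elu_feature(x):
    # Positive feature map phi(x) = elu(x) + 1, a common choice in linear attention
    return np.where(x > 0, x + 1.0, np.exp(x))

def momentum_linear_attention(Q, K, V, beta=0.6):
    """Causal linear attention whose recurrent state update carries a
    heavy-ball momentum term (a sketch of the idea, not the paper's model).
    Q, K: (T, d); V: (T, d_v); beta: momentum coefficient."""
    T, d = Q.shape
    phi_q, phi_k = elu_feature(Q), elu_feature(K)
    S = np.zeros((d, V.shape[1]))   # running key-value state
    M = np.zeros_like(S)            # momentum of the state update
    z = np.zeros(d)                 # running normalizer
    out = np.zeros_like(V)
    for t in range(T):
        M = beta * M + np.outer(phi_k[t], V[t])  # accumulate past updates
        S = S + M                                # momentum-augmented state step
        z = z + phi_k[t]
        out[t] = (phi_q[t] @ S) / (phi_q[t] @ z + 1e-6)
    return out

# e.g.: out = momentum_linear_attention(*np.random.default_rng(0).standard_normal((3, 8, 4)))
```

With `beta = 0` this reduces to the standard linear-attention recurrence, which matches the paper's framing of momentum as an add-on that preserves linear cost.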
Physics-informed neural networks (PINNs) incorporate physical knowledge of the problem domain as soft constraints on the loss function, but recent work has shown that this can lead to optimization difficulties. Here, we study the impact of the location of the collocation points on the trainability of these models. We find that the performance of vanilla PINNs can be significantly boosted by adapting the location of the collocation points as training proceeds. Specifically, we propose a novel adaptive collocation scheme that progressively allocates more collocation points (without increasing their total number) to regions where the model incurs higher error, based on the gradient of the loss function over the domain. Combined with judicious restarts of training during optimization stalls (by simply resampling the collocation points to reshape the loss landscape), this leads to better estimates of the prediction error. We present results for several problems, including 2D Poisson and diffusion-advection systems with different forcing functions. We find that training vanilla PINNs on these problems can yield prediction errors of up to 70% in the solution, especially in the regime of few collocation points. In contrast, our adaptive scheme can achieve errors an order of magnitude smaller, with computational complexity similar to the baseline. Furthermore, we find that the adaptive methods consistently perform on par with or slightly better than vanilla PINNs, even in regimes with many collocation points. The code for all experiments is open-sourced.
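As a rough illustration of the adaptive idea (a sketch under assumptions, not the paper's exact scheme), the snippet below keeps the total number of collocation points fixed while re-spawning the lowest-error points near regions of high PDE residual:

```python
import numpy as np

def resample_collocation(points, residuals, frac=0.3, rng=None):
    """Move a fraction of collocation points, keeping their total number
    fixed, toward regions where the PDE residual is largest.

    points: (N, d) current collocation points
    residuals: (N,) nonnegative error proxy, e.g. |PDE residual| per point
    """
    rng = rng or np.random.default_rng(0)
    N = len(points)
    n_move = int(frac * N)
    order = np.argsort(residuals)
    keep = points[order[n_move:]]            # retain the high-residual points
    # Respawn the lowest-residual points near points sampled in proportion
    # to their residual, with a small jitter so they spread locally.
    probs = residuals / residuals.sum()
    anchors = points[rng.choice(N, size=n_move, p=probs)]
    jitter = 0.05 * rng.standard_normal(anchors.shape)
    return np.concatenate([keep, anchors + jitter], axis=0)
```

Calling this between optimizer epochs (or at a detected stall, as a restart) mimics the paper's two ingredients: error-driven reallocation and resampling to reshape the loss landscape.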
Multi-fidelity modeling and learning are important in applications involving physical simulation. They can leverage both low-fidelity and high-fidelity examples for training, reducing the cost of data generation while still achieving good performance. While existing approaches only model finite, discrete fidelities, in practice the fidelity choice is often continuous and infinite, corresponding for example to a continuous mesh spacing or finite-element length. In this paper, we propose Infinite Fidelity Coregionalization (IFC). Given the data, our method can extract and exploit rich information within continuous, infinite fidelities to bolster prediction accuracy. Our model can interpolate and/or extrapolate predictions to novel fidelities, which can even be higher than the fidelities of the training data. Specifically, we introduce a low-dimensional latent output as a continuous function of the fidelity and the input, and multiply it with a basis matrix to predict the high-dimensional solution output. We model the latent output as a neural ordinary differential equation (ODE) to capture the complex relationships within fidelities and to integrate information throughout the continuous fidelities. We then use Gaussian processes or another ODE to estimate the fidelity-varying bases. For efficient inference, we reorganize the bases as a tensor and use a tensor-Gaussian variational posterior to develop a scalable inference algorithm for massive outputs. We demonstrate the advantage of our method in several benchmark tasks in computational physics.
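The structural idea can be sketched in a few lines: a low-dimensional latent state is integrated across the continuous fidelity coordinate (here with a crude Euler loop and toy dynamics standing in for the learned neural ODE), then mapped through a basis matrix to the high-dimensional output. Everything below is an illustrative assumption, not the paper's learned components or inference.

```python
import numpy as np

rng = np.random.default_rng(0)
d_latent, d_out = 4, 100
B = rng.standard_normal((d_out, d_latent))      # stand-in for fidelity bases
W = 0.1 * rng.standard_normal((d_latent, d_latent))

def f(h, m, x):
    # Toy latent dynamics dh/dm (a learned neural ODE in the actual model)
    return np.tanh(W @ h + x) - 0.5 * h

def predict(x, m, n_steps=50):
    h = np.tanh(x * np.ones(d_latent))          # toy initial latent state h0(x)
    dm = m / n_steps
    for k in range(n_steps):
        h = h + dm * f(h, k * dm, x)            # Euler step across fidelity
    return B @ h                                # map to high-dimensional output

y_low  = predict(x=0.3, m=0.5)   # prediction at a low fidelity
y_high = predict(x=0.3, m=2.0)   # extrapolation beyond training fidelities
```

Because fidelity enters as the ODE's integration variable, querying a fidelity never seen in training is just integrating further, which is what makes extrapolation above the training fidelities natural in this setup.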
Recent work in scientific machine learning has developed so-called physics-informed neural network (PINN) models. The typical approach is to incorporate physical domain knowledge as soft constraints on an empirical loss function and to train the model with existing machine learning methodology. We demonstrate that, while existing PINN methodologies can learn good models for relatively trivial problems, they can easily fail to learn the relevant physical phenomena for even slightly more complex problems. In particular, we analyze several distinct situations of widespread physical interest, including learning differential equations with convection, reaction, and diffusion operators. We provide evidence that the soft regularization in PINNs, which involves PDE-based differential operators, can introduce a number of subtle problems, including making the problem more ill-conditioned. Importantly, we show that these possible failure modes are not due to a lack of expressivity in the NN architecture, but rather that the PINN setup makes the loss landscape very hard to optimize. We then describe two promising solutions to address these failure modes. The first approach is curriculum regularization, where the PINN's loss term starts from a simple PDE regularization and becomes progressively more complex as the NN is trained. The second approach is to pose the problem as a sequence-to-sequence learning task, rather than learning to predict the entire space-time solution at once. Extensive testing shows that these methods can achieve up to 1-2 orders of magnitude lower error than regular PINN training.
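A minimal sketch of the curriculum-regularization idea for a convection equation u_t + beta * u_x = 0: train with a small convection coefficient first, then warm-start as beta grows toward the hard target value. The architecture, sampling, initial condition, and schedule are illustrative assumptions, not the paper's exact setup.

```python
import torch

torch.manual_seed(0)
model = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def pde_residual(beta):
    # PDE residual of u_t + beta * u_x = 0 at random interior points
    xt = torch.rand(256, 2, requires_grad=True)    # columns: (x, t) in [0, 1]^2
    u = model(xt)
    g = torch.autograd.grad(u.sum(), xt, create_graph=True)[0]
    u_x, u_t = g[:, 0], g[:, 1]
    return ((u_t + beta * u_x) ** 2).mean()

def ic_loss():
    # Match a sinusoidal initial condition u(x, 0) = sin(2*pi*x)
    x = torch.rand(256, 1)
    xt0 = torch.cat([x, torch.zeros_like(x)], dim=1)
    return ((model(xt0) - torch.sin(2 * torch.pi * x)) ** 2).mean()

for beta in [1.0, 5.0, 10.0, 20.0, 30.0]:          # curriculum: easy -> hard
    for _ in range(500):
        opt.zero_grad()
        loss = ic_loss() + pde_residual(beta)
        loss.backward()
        opt.step()
```

Training directly at the largest beta is exactly the ill-conditioned regime the paper warns about; the staged schedule keeps each optimization problem close to one the network has already solved.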
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical imaging analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
Machine learning (ML) has found broad applicability in quantum information science in topics as diverse as experimental design, state classification, and even studies on quantum foundations. Here, we experimentally realize an approach for defining custom prior distributions that are automatically tuned using ML for use with Bayesian quantum state estimation methods. Previously, researchers have looked to Bayesian quantum state tomography due to its unique advantages like natural uncertainty quantification, the return of reliable estimates under any measurement condition, and minimal mean-squared error. However, practical challenges related to long computation times and conceptual issues concerning how to incorporate prior knowledge most suitably can overshadow these benefits. Using both simulated and experimental measurement results, we demonstrate that ML-defined prior distributions reduce net convergence times and provide a natural way to incorporate both implicit and explicit information directly into the prior distribution. These results constitute a promising path toward practical implementations of Bayesian quantum state tomography.
Despite the impact of psychiatric disorders on clinical health, early-stage diagnosis remains a challenge. Machine learning studies have shown that classifiers tend to be overly narrow in the diagnosis prediction task. The overlap between conditions leads to high heterogeneity among participants that is not adequately captured by classification models. To address this issue, normative approaches have surged as an alternative method. By using a generative model to learn the distribution of healthy brain data patterns, we can identify the presence of pathologies as deviations or outliers from the distribution learned by the model. In particular, deep generative models showed great results as normative models to identify neurological lesions in the brain. However, unlike most neurological lesions, psychiatric disorders present subtle changes widespread in several brain regions, making these alterations challenging to identify. In this work, we evaluate the performance of transformer-based normative models to detect subtle brain changes expressed in adolescents and young adults. We trained our model on 3D MRI scans of neurotypical individuals (N=1,765). Then, we obtained the likelihood of neurotypical controls and psychiatric patients with early-stage schizophrenia from an independent dataset (N=93) from the Human Connectome Project. Using the predicted likelihood of the scans as a proxy for a normative score, we obtained an AUROC of 0.82 when assessing the difference between controls and individuals with early-stage schizophrenia. Our approach surpassed recent normative methods based on brain age and Gaussian Process, showing the promising use of deep generative models to help in individualised analyses.
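The evaluation step is simple to reproduce in outline: the likelihood a trained generative model assigns to each scan serves as a normative deviation score, and AUROC measures the separation between groups. The numbers below are placeholders standing in for model outputs, not the paper's data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical per-scan log-likelihoods from a trained normative model
rng = np.random.default_rng(0)
loglik_controls = rng.normal(-100.0, 5.0, size=60)   # neurotypical controls
loglik_patients = rng.normal(-108.0, 6.0, size=33)   # early-stage patients

# Lower likelihood = larger deviation from the healthy distribution,
# so negate it to get a score where higher means "more abnormal".
scores = np.concatenate([-loglik_controls, -loglik_patients])
labels = np.concatenate([np.zeros(60), np.ones(33)])
print("AUROC:", roc_auc_score(labels, scores))
```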
Previous work has shown that a neural network with the rectified linear unit (ReLU) activation function leads to a convex polyhedral decomposition of the input space. These decompositions can be represented by a dual graph with vertices corresponding to polyhedra and edges corresponding to polyhedra sharing a facet, which is a subgraph of a Hamming graph. This paper illustrates how one can utilize the dual graph to detect and analyze adversarial attacks in the context of digital images. When an image passes through a network containing ReLU nodes, the firing or non-firing at a node can be encoded as a bit ($1$ for ReLU activation, $0$ for ReLU non-activation). The sequence of all bit activations identifies the image with a bit vector, which identifies it with a polyhedron in the decomposition and, in turn, identifies it with a vertex in the dual graph. We identify ReLU bits that are discriminators between non-adversarial and adversarial images and examine how well collections of these discriminators can ensemble vote to build an adversarial image detector. Specifically, we examine the similarities and differences of ReLU bit vectors for adversarial images, and their non-adversarial counterparts, using a pre-trained ResNet-50 architecture. While this paper focuses on adversarial digital images, ResNet-50 architecture, and the ReLU activation function, our methods extend to other network architectures, activation functions, and types of datasets.
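A sketch of extracting such a ReLU bit vector with forward hooks on a pre-trained torchvision ResNet-50 follows; the random tensor stands in for a preprocessed image. Note that torchvision's ResNet reuses ReLU modules within blocks, so a hook fires once per application, which still records every firing in call order.

```python
import torch
from torchvision import models

# Requires torchvision >= 0.13 for the weights API
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
bits = []  # cleared per image; collects one entry per ReLU application

def record_bits(module, inputs, output):
    # 1 where the ReLU fired, 0 where it did not
    bits.append((output > 0).flatten().to(torch.uint8))

hooks = [m.register_forward_hook(record_bits)
         for m in model.modules() if isinstance(m, torch.nn.ReLU)]

image = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed image
with torch.no_grad():
    model(image)
bit_vector = torch.cat(bits)  # identifies the polyhedron / dual-graph vertex
for h in hooks:
    h.remove()
```

Comparing these bit vectors between an image and its adversarially perturbed counterpart is then a Hamming-distance computation, which is the raw material for the discriminator bits described above.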
Artificial intelligence methods including deep neural networks (DNN) can provide rapid molecular classification of tumors from routine histology with accuracy that matches or exceeds human pathologists. Discerning how neural networks make their predictions remains a significant challenge, but explainability tools help provide insights into what models have learned when corresponding histologic features are poorly defined. Here, we present a method for improving explainability of DNN models using synthetic histology generated by a conditional generative adversarial network (cGAN). We show that cGANs generate high-quality synthetic histology images that can be leveraged for explaining DNN models trained to classify molecularly-subtyped tumors, exposing histologic features associated with molecular state. Fine-tuning synthetic histology through class and layer blending illustrates nuanced morphologic differences between tumor subtypes. Finally, we demonstrate the use of synthetic histology for augmenting pathologist-in-training education, showing that these intuitive visualizations can reinforce and improve understanding of histologic manifestations of tumor biology.
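To illustrate what class blending means mechanically, one can hold the latent noise fixed and interpolate between two learned class embeddings before feeding them to a conditional generator. The tiny generator and embedding below are toy stand-ins, not the paper's cGAN.

```python
import torch

torch.manual_seed(0)
n_classes, z_dim, emb_dim = 2, 64, 16
embed = torch.nn.Embedding(n_classes, emb_dim)   # learned class embeddings
G = torch.nn.Sequential(                         # stand-in generator G(z, c)
    torch.nn.Linear(z_dim + emb_dim, 256), torch.nn.ReLU(),
    torch.nn.Linear(256, 32 * 32), torch.nn.Tanh(),
)

z = torch.randn(1, z_dim)                        # fixed noise: same "tissue"
c0, c1 = embed(torch.tensor([0])), embed(torch.tensor([1]))
frames = []
for alpha in [0.0, 0.25, 0.5, 0.75, 1.0]:
    c = (1 - alpha) * c0 + alpha * c1            # blend subtype A toward B
    frames.append(G(torch.cat([z, c], dim=1)).view(32, 32))
```

Sweeping `alpha` produces a morph sequence in which only subtype-linked features change, which is what makes the blended images useful for isolating the histologic correlates of molecular state.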
Bayesian Optimization is a useful tool for experiment design. Unfortunately, the classical, sequential setting of Bayesian Optimization does not translate well into laboratory experiments, for instance battery design, where measurements may come from different sources and their evaluations may require significant waiting times. Multi-fidelity Bayesian Optimization addresses the setting with measurements from different sources. Asynchronous batch Bayesian Optimization provides a framework to select new experiments before the results of the prior experiments are revealed. This paper proposes an algorithm combining multi-fidelity and asynchronous batch methods. We empirically study the algorithm behavior, and show it can outperform single-fidelity batch methods and multi-fidelity sequential methods. As an application, we consider designing electrode materials for optimal performance in pouch cells using experiments with coin cells to approximate battery performance.
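The combination can be caricatured in a toy loop: pending experiments are "hallucinated" with the current GP posterior mean (a kriging-believer-style batch heuristic), and fidelity is chosen cost-awarely. The objective, fidelity rule, and batch logic are illustrative assumptions rather than the paper's algorithm; a real multi-fidelity method would also explicitly model the correlation between coin-cell and pouch-cell measurements rather than pooling them in one GP.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

f_hi = lambda x: -(x - 0.6) ** 2                   # "pouch cell": costly, exact (toy)
f_lo = lambda x: f_hi(x) + 0.05 * np.sin(20 * x)   # "coin cell": cheap, biased proxy (toy)
grid = np.linspace(0.0, 1.0, 201).reshape(-1, 1)

X, y = [[0.1], [0.9]], [f_hi(0.1), f_hi(0.9)]
pending = []                                       # (point, fidelity) still running

def select_next():
    gp = GaussianProcessRegressor(alpha=1e-6, normalize_y=True)
    gp.fit(np.array(X), np.array(y))
    if pending:                                    # hallucinate unfinished experiments
        P = np.array([p for p, _ in pending])
        gp.fit(np.vstack([X, P]), np.concatenate([y, gp.predict(P)]))
    mu, sd = gp.predict(grid, return_std=True)
    i = int(np.argmax(mu + 2.0 * sd))              # UCB acquisition
    fidelity = "hi" if sd[i] < 0.05 else "lo"      # pay for high fidelity when confident
    return grid[i].tolist(), fidelity

for _ in range(6):
    pending.append(select_next())                  # launch without waiting for results
    if len(pending) == 2:                          # a batch of results arrives at once
        for p, fid in pending:
            X.append(p)
            y.append(float(f_hi(p[0]) if fid == "hi" else f_lo(p[0])))
        pending = []
print("best x so far:", X[int(np.argmax(y))])
```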